Search Results: "pb"

4 August 2023

Louis-Philippe Véronneau: pymonitair: Air Quality Monitoring Display with MicroPython

I've never been a fan of IoT devices for obvious reasons: not only do they tend to be excellent at being expensive vendor locked-in machines, but far too often, they also end up turning into e-waste after a short amount of time. Manufacturers can go out of business or simply decide to shut down the cloud servers for older models, and then you're stuck with a brick. Well, this all changes today, as I've built my first IoT device and I love it. Introducing pymonitair.

What

pymonitair is a MicroPython project that aims to display weather data from a home weather station (like the ones sold by AirGradient) on a small display. The source code was written for the Raspberry Pi Pico W, the Waveshare Pico OLED 1.3 display and the RevolvAir Revo 1 weather station, but can be adapted to other displays and stations easily, as I tried to keep the code as modular as possible. The general MicroPython code itself isn't specific to the Raspberry Pi Pico and shouldn't need to be modified for other boards. pymonitair features: Here's a demo of me scrolling through the different pages and (somewhat failing to) turn the screen on and off:

Why?

If you follow my blog, you'll know that my last entry was about building a set of tools to collect and graph data from a weather station my neighbor set up. Why on Earth would I need a separate device to show this data, when the website I've built works perfectly fine and is accessible on any computer or smartphone? Mostly alerts. When the air quality here dropped following forest fires, I found out that keeping track of whether I had to close my windows and bunker down was quite a hassle. Air quality would degrade during the day and I would only notice it hours later. With pymonitair, I'll have a little screen flashing angrily at me whenever this happens. A simpler solution would probably have been to forgo hardware altogether and code some icinga2 alert to ping me over Signal whenever the air quality got bad. Hacking on pymonitair was mostly a way to learn to use MicroPython and familiarize myself with this type of embedded hardware device. I'll surely blog about this later this year, but I plan to use a very similar stack to mod my apartment's HVAC unit to stop pulling air from outside when an air quality sensor detects cigarette smoke (or bad air quality in general).

Things I've learnt

This project was super fun and taught me many things:

  1. PM1, PM2.5, PM10, Temperature, Humidity and Pressure
  2. Part of the screen will flash repeatedly
  3. I did look for other solutions to transfer files to the board, but none of them were actually maintained. I nearly finished packaging ampy before realising it was officially unmaintained, and its main alternative, rshell, had its last release in December 2021. When I caught myself seriously considering writing a script to transfer files over the serial link, I gave up and decided thonny was not that bad after all.

2 August 2023

Debian Brasil: Debian Brazil at Campus Party Brazil 2023

Another edition of Campus Party Brasil took place in the city of São Paulo between the 25th and 30th of July 2023. Once more the Debian Brazil community was present. During the event, in the space made available to us, we carried out several activities:
- Distribution of swag for attendees (stickers, cups, lanyards);
- Workshop on how to contribute to the translation team;
- Workshop on packaging;
- Key signing party;
- Information about the project; Every day, there was always someone available to pass on information about what Debian is and the different ways to contribute. During the entire event, we estimate that at least 700 people interacted in some way with our community. Several people took the opportunity to congratulate the project on the excellent work done on Debian 12 - Bookworm. Here are some photos taken during the event! CPBR15
Community Space at the Event.
CPBR15
Romulo, a visitor to the space, with Daniel Lenharo.
CPBR15
Some of the swag available to attendees.
CPBR15
View of the space.
CPBR15
Sticker with the Debian 30 years artwork made by Jefferson.
CPBR15
People in the community space.
CPBR15
Packaging workshop, held by Charles.
CPBR15
People who were involved in the community activities.

1 August 2023

Debian Brasil: Debian participation at Campus Party Brasil 2023

Another edition of Campus Party Brasil took place in the city of São Paulo between the 25th and 30th of July 2023. Once again the Brazilian community was present. During the event, in the space made available to us, we carried out several activities:
- Distribution of swag (stickers, cups, lanyards);
- Mini workshop on how to contribute to the translation team;
- Mini workshop on packaging;
- Information about the project; Every day there was always someone available to explain what Debian is and the ways to contribute. Throughout the event, we estimate that at least 700 people interacted in some way with our community. Several people took the opportunity to congratulate the project on the excellent work done on Debian 12 - Bookworm. Here are some photos taken during the event! CPBR15
Community space at the event.
CPBR15
Romulo, a visitor to the space, with Daniel Lenharo.
CPBR15
Some of the swag available to the public.
CPBR15
View of the space.
CPBR15
Sticker with the Debian 30 years artwork made by Jefferson.
CPBR15
People in the community space.
CPBR15
Packaging mini-course, held by Charles.
CPBR15
People who were involved in the community activities.

26 July 2023

Enrico Zini: Mysterious DNS issues

Uhm, salsa is not resolving:
$ git fetch
ssh: Could not resolve hostname salsa.debian.org: Name or service not known
fatal: Could not read from remote repository.
$ ping salsa.debian.org
ping: salsa.debian.org: Name or service not known
But... it is?
$ host salsa.debian.org
salsa.debian.org has address 209.87.16.44
salsa.debian.org has IPv6 address 2607:f8f0:614:1::1274:44
salsa.debian.org mail is handled by 10 mailly.debian.org.
salsa.debian.org mail is handled by 10 mitropoulos.debian.org.
salsa.debian.org mail is handled by 10 muffat.debian.org.
It really is resolving correctly at each step:
$ cat /etc/resolv.conf
# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
# [...]
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
# [...]
nameserver 127.0.0.53
options edns0 trust-ad
search fritz.box
$ host salsa.debian.org 127.0.0.53
Using domain server:
Name: 127.0.0.53
Address: 127.0.0.53#53
Aliases:
salsa.debian.org has address 209.87.16.44
salsa.debian.org has IPv6 address 2607:f8f0:614:1::1274:44
salsa.debian.org mail is handled by 10 mailly.debian.org.
salsa.debian.org mail is handled by 10 muffat.debian.org.
salsa.debian.org mail is handled by 10 mitropoulos.debian.org.
# resolvectl status
Global
       Protocols: +LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Link 3 (wlp108s0)
    Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 192.168.178.1
       DNS Servers: 192.168.178.1 fd00::3e37:12ff:fe99:2301 2a01:b600:6fed:1:3e37:12ff:fe99:2301
        DNS Domain: fritz.box
Link 4 (virbr0)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 9 (enxace2d39ce693)
    Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 192.168.178.1
       DNS Servers: 192.168.178.1 fd00::3e37:12ff:fe99:2301 2a01:b600:6fed:1:3e37:12ff:fe99:2301
        DNS Domain: fritz.box
$ host salsa.debian.org 192.168.178.1
Using domain server:
Name: 192.168.178.1
Address: 192.168.178.1#53
Aliases:
salsa.debian.org has address 209.87.16.44
salsa.debian.org has IPv6 address 2607:f8f0:614:1::1274:44
salsa.debian.org mail is handled by 10 muffat.debian.org.
salsa.debian.org mail is handled by 10 mitropoulos.debian.org.
salsa.debian.org mail is handled by 10 mailly.debian.org.
$ host salsa.debian.org fd00::3e37:12ff:fe99:2301 2a01:b600:6fed:1:3e37:12ff:fe99:2301
Using domain server:
Name: fd00::3e37:12ff:fe99:2301
Address: fd00::3e37:12ff:fe99:2301#53
Aliases:
salsa.debian.org has address 209.87.16.44
salsa.debian.org has IPv6 address 2607:f8f0:614:1::1274:44
salsa.debian.org mail is handled by 10 muffat.debian.org.
salsa.debian.org mail is handled by 10 mitropoulos.debian.org.
salsa.debian.org mail is handled by 10 mailly.debian.org.
Could it be caching?
# systemctl restart systemd-resolved
$ dpkg -s nscd
dpkg-query: package 'nscd' is not installed and no information is available
$ git fetch
ssh: Could not resolve hostname salsa.debian.org: Name or service not known
fatal: Could not read from remote repository.
Could it be something in ssh's config?
$ grep salsa ~/.ssh/config
$ ssh git@salsa.debian.org
ssh: Could not resolve hostname salsa.debian.org: Name or service not known
Something weird with ssh's control sockets?
$ strace -fo /tmp/zz ssh git@salsa.debian.org
ssh: Could not resolve hostname salsa.debian.org: Name or service not known
enrico@ploma:~/lavori/legal/legal$ grep salsa /tmp/zz
393990 execve("/usr/bin/ssh", ["ssh", "git@salsa.debian.org"], 0x7ffffcfe42d8 /* 54 vars */) = 0
393990 connect(3, {sa_family=AF_UNIX, sun_path="/home/enrico/.ssh/sock/git@salsa.debian.org:22"}, 110) = -1 ENOENT (No such file or directory)
$ strace -fo /tmp/zz1 ssh -S none git@salsa.debian.org
ssh: Could not resolve hostname salsa.debian.org: Name or service not known
$ grep salsa /tmp/zz1
394069 execve("/usr/bin/ssh", ["ssh", "-S", "none", "git@salsa.debian.org"], 0x7ffd36cbfde8 /* 54 vars */) = 0
How is ssh trying to resolve salsa.debian.org?
393990 socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
393990 connect(3, {sa_family=AF_UNIX, sun_path="/run/systemd/resolve/io.systemd.Resolve"}, 42) = 0
393990 sendto(3, "{\"method\":\"io.systemd.Resolve.Re"..., 99, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 99
393990 mmap(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4fc71ca000
393990 recvfrom(3, 0x7f4fc71ca010, 135152, MSG_DONTWAIT, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
393990 ppoll([{fd=3, events=POLLIN}], 1, {tv_sec=119, tv_nsec=999917000}, NULL, 8) = 1 ([{fd=3, revents=POLLIN}], left {tv_sec=119, tv_nsec=998915689})
393990 recvfrom(3, "{\"error\":\"io.systemd.System\",\"pa"..., 135152, MSG_DONTWAIT, NULL, NULL) = 56
393990 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
393990 close(3)                         = 0
393990 munmap(0x7f4fc71ca000, 135168)   = 0
393990 getpid()                         = 393990
393990 write(2, "ssh: Could not resolve hostname "..., 77) = 77
Something weird with resolved?
$ resolvectl query salsa.debian.org
salsa.debian.org: resolve call failed: Lookup failed due to system error: Invalid argument
Let's try disrupting what ssh is trying and failing:
# mv /run/systemd/resolve/io.systemd.Resolve /run/systemd/resolve/io.systemd.Resolve.backup
$ strace -o /tmp/zz2 ssh -S none -vv git@salsa.debian.org
OpenSSH_9.2p1 Debian-2, OpenSSL 3.0.9 30 May 2023
debug1: Reading configuration data /home/enrico/.ssh/config
debug1: /home/enrico/.ssh/config line 1: Applying options for *
debug1: /home/enrico/.ssh/config line 228: Applying options for *.debian.org
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolving "salsa.debian.org" port 22
ssh: Could not resolve hostname salsa.debian.org: Name or service not known
$ tail /tmp/zz2
394748 prctl(PR_CAPBSET_READ, 0x29 /* CAP_??? */) = -1 EINVAL (Invalid argument)
394748 munmap(0x7f27af5ef000, 164622)   = 0
394748 rt_sigprocmask(SIG_BLOCK, [HUP USR1 USR2 PIPE ALRM CHLD TSTP URG VTALRM PROF WINCH IO], [], 8) = 0
394748 futex(0x7f27ae5feaec, FUTEX_WAKE_PRIVATE, 2147483647) = 0
394748 openat(AT_FDCWD, "/run/systemd/machines/salsa.debian.org", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
394748 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
394748 getpid()                         = 394748
394748 write(2, "ssh: Could not resolve hostname "..., 77) = 77
394748 exit_group(255)                  = ?
394748 +++ exited with 255 +++
$ machinectl list
No machines.
# resolvectl flush-caches
$ resolvectl query salsa.debian.org
salsa.debian.org: resolve call failed: Lookup failed due to system error: Invalid argument
# resolvectl reset-statistics
$ resolvectl query salsa.debian.org
salsa.debian.org: resolve call failed: Lookup failed due to system error: Invalid argument
# resolvectl reset-server-features
$ resolvectl query salsa.debian.org
salsa.debian.org: resolve call failed: Lookup failed due to system error: Invalid argument
# resolvectl monitor
  Q: salsa.debian.org IN A
  Q: salsa.debian.org IN AAAA
  S: EINVAL
  A: debian.org IN NS sec2.rcode0.net
  A: debian.org IN NS sec1.rcode0.net
  A: debian.org IN NS nsp.dnsnode.net
  A: salsa.debian.org IN A 209.87.16.44
  A: debian.org IN NS dns4.easydns.info
I guess I won't be using salsa today, and I wish I understood why. Update: as soon as I pushed this post to my blog (via ssh) salsa started resolving again.
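The asymmetry above (host resolving fine while ssh and ping fail) is presumably down to the resolution path: host and dig send DNS queries straight to the configured server, while ssh and ping go through glibc's getaddrinfo(), which on this machine ends up in nss-resolve talking to systemd-resolved over the io.systemd.Resolve varlink socket seen in the strace, and that is the call that returned the error. A rough way to compare the two paths side by side (a sketch, not something from the original debugging session):
$ getent ahosts salsa.debian.org     # getaddrinfo()/NSS path, the same one ssh and ping use
$ resolvectl query salsa.debian.org  # asks systemd-resolved directly
$ host salsa.debian.org 127.0.0.53   # plain DNS query to the stub, bypassing NSS entirely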

22 July 2023

Andrew Cater: 20230722 1804 UTC - All signed, all pushed, all done.

So it's there on the Download page of the Debian website. Thanks to the folks who've done this point release testing. No new major bugs were found: a couple of pre-existing ones may still be there.

Thanks very much indeed to the new people who have been in IRC on debian-cd, downloading, testing, editing the wiki page.

This release has gone very well indeed - I'll see some of the same folk at the BBQ in late August but otherwise we'll be back in September or so for the next point release for Bookworm (and probably a release for Bullseye as well).

Now trying various tricks with Raspberry Pi 4s to get them to boot with UEFI and the standard Debian installer.

11 July 2023

Simon Josefsson: Coping with non-free software in Debian

A personal reflection on how I moved from my Debian home to find two new homes with Trisquel and Guix for my own ethical computing, and while doing so settled my dilemma about further Debian contributions. Debian's contributions to the free software community have been tremendous. Debian was one of the early distributions in the 1990s that combined the GNU tools (compiler, linker, shell, editor, and a set of Unix tools) with the Linux kernel and published a free software operating system. Back then there was little guidance on how to publish free software binaries, let alone entire operating systems. There was a lack of established community processes and conflict resolution mechanisms, and a lack of guiding principles to motivate the work. The community building efforts that came about in parallel with the technical work have resulted in a steady flow of releases over the years. From the work of Richard Stallman and the Free Software Foundation (FSF) during the 1980s and early 1990s, there was at the time already an established definition of free software. Inspired by the free software definition, and a belief that a social contract helps to build a community and resolve conflicts, Debian's social contract (DSC) with the free software community was published in 1997. The DSC included the Debian Free Software Guidelines (DFSG), which directly led to the Open Source Definition.

Slackware 3.5" disks: one of my earlier Slackware install disk sets, kept for nostalgic reasons.
I was introduced to GNU/Linux through Slackware in the early 1990s (oh boy, those nights calculating XFree86 modelines and debugging sendmail.cf) and primarily used RedHat Linux during ca 1995-2003. I switched to Debian during the Woody release cycles, when the original RedHat Linux was abandoned and Fedora launched. It was Debian's explicit community processes and infrastructure that attracted me. The slow nature of community processes also kept me using RedHat for so long: centralized and dogmatic decision processes often produce quick and effective outcomes, and in my opinion RedHat Linux was technically better than Debian ca 1995-2003. However the RedHat model was not sustainable, and resulted in the RedHat vs Fedora split. Debian caught up, and reached technical stability once its community processes had been grounded. I started participating in the Debian community around late 2006.

My interpretation of Debian's social contract is that Debian should be a distribution of works licensed 100% under a free license. The Debian community has always been inclusive towards non-free software, creating the contrib/non-free section and permitting use of the bug tracker to help resolve issues with non-free works. This is all explained in the social contract. There has always been a clear boundary between free and non-free work, and there has been a commitment that the Debian system itself would be 100% free. The concern that RedHat Linux was not 100% free software was not critical to me at the time: I primarily (and happily) ran GNU tools on Solaris, IRIX, AIX, OS/2, Windows etc. Running GNU tools on RedHat Linux was an improvement, and I hadn't realized it was possible to get rid of all non-free software on my own primary machine. Debian realized that goal for me. I've been a believer in that model ever since. I can use Solaris, macOS, Android etc knowing that I have the option of using a 100% free Debian. While the inclusive approach towards non-free software invites and deserves criticism (some argue that being inclusive to non-inclusive behavior is a bad idea), I believe that Debian's approach was a successful survival technique: by being inclusive to and a compromise between free and non-free communities, Debian has been able to stay relevant and contribute to both environments. If Debian had not served and contributed to the free community, I believe free software people would have stopped contributing. If Debian had rejected non-free works completely, I don't think the successful Ubuntu distribution would have been based on Debian.

I wrote the majority of the text above back in September 2022, intending to post it as a way to argue for my proposal to maintain the status quo within Debian. I didn't post it because I felt I was saying the obvious, and the obvious does not need to be repeated, and the rest of the post was just me going down memory lane. The Debian project has been a sustainable producer of a 100% free OS up until Debian 11 bullseye. In the resolution on non-free firmware the community decided to leave the model that had resulted in a 100% free Debian for so long. The goal of Debian is no longer to publish a 100% free operating system; instead this was added: "The Debian official media may include firmware". Indeed the Debian 12 bookworm release has confirmed that this would not only be an optional possibility.
The Debian community could have published a 100% free Debian, in parallel with the non-free Debian, and still been consistent with their newly adopted policy, but chose not to. The result is that Debian's policies are not consistent with their actions. It doesn't make sense to claim that Debian is 100% free when the Debian installer contains non-free software. Actions speak louder than words, so I'm left reading the policies as well-intended prose that is no longer used for guidance, but for the peace of mind of people living in ivory towers. And to attract funding, I suppose.

So how to deal with this, on a personal level? I did not have an answer to that back in October 2022 after the vote. It wasn't clear to me that I would ever want to contribute to Debian under the new social contract that promoted non-free software. I went on vacation from any Debian work. Meanwhile Debian 12 bookworm was released, confirming my fears. I kept coming back to this text, and my only take-away was that it would be unethical for me to use Debian on my machines. Letting actions speak for themselves, I switched to PureOS on my main laptop during October, barely noticing any difference since it is based on Debian 11 bullseye. Back in December, I bought a new laptop and tried Trisquel and Guix on it, as they promise a migration path towards ppc64el that PureOS does not. While I pondered how to approach my modest Debian contributions, I set out to learn Trisquel and gained trust in it. I migrated one Debian machine after another to Trisquel, and started to use Guix on others. Migration was easy because Trisquel is based on Ubuntu which is based on Debian. Using Guix has its challenges, but I enjoy its coherent, documented environment. All of my essential self-hosted servers (VM hosts, DNS, e-mail, WWW, Nextcloud, CI/CD builders, backup etc) use Trisquel or Guix now. I've migrated many GitLab CI/CD rules to use Trisquel instead of Debian, to have a more ethical computing base for software development and deployment. I wish there were official Guix docker images around.

Time has passed, and when I now think about any Debian contributions, I'm a little less muddled by my disappointment at the exclusion of a 100% free Debian. I realize that today I can use Debian in the same way that I use macOS, Android, RHEL or Ubuntu. And what prevents me from contributing to free software on those platforms? So I will make the occasional Debian contribution again, knowing that it will also indirectly improve Trisquel. To avoid having to install Debian, I need a development environment in Trisquel that allows me to build Debian packages. I have found a recipe for doing this:
# System commands:
sudo apt-get install debhelper git-buildpackage debian-archive-keyring
sudo wget -O /usr/share/debootstrap/scripts/debian-common https://sources.debian.org/data/main/d/debootstrap/1.0.128%2Bnmu2/scripts/debian-common
sudo wget -O /usr/share/debootstrap/scripts/sid https://sources.debian.org/data/main/d/debootstrap/1.0.128%2Bnmu2/scripts/sid
# Run once to create build image:
DIST=sid git-pbuilder create --mirror http://deb.debian.org/debian/ --debootstrapopts "--exclude=usr-is-merged" --basepath /var/cache/pbuilder/base-sid.cow
# Run in a directory with debian/ to build a package:
gbp buildpackage --git-pbuilder --git-dist=sid
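To keep the build environment current over time, the same git-pbuilder wrapper can presumably be used for maintenance; something along these lines should work (untested here, shown only as a sketch):
# Refresh the sid base image periodically:
DIST=sid git-pbuilder update --basepath /var/cache/pbuilder/base-sid.cow
# Open a shell inside the build environment for debugging:
DIST=sid git-pbuilder login --basepath /var/cache/pbuilder/base-sid.cow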
How to sustainably deliver a 100% free software binary distribution seems like an open question, and the challenges are not all that different compared to the 1990s or early 2000s. I'm hoping Debian will come back to provide a 100% free platform, but my fear is that Debian will compromise even further on the free software ideals rather than the opposite. With similar arguments to those used to add the non-free firmware, Debian could compromise the free software spirit of the Linux boot process (e.g., non-free boot images signed by Debian) and media handling (e.g., web browsers and DRM), as Debian has already done with appstore-like functionality for non-free software (Python pip). To learn about other freedom issues in Debian packaging, browsing Trisquel's helper scripts may enlighten you. Debian's setback and the recent setback for RHEL-derived distributions are sad, and it will be a challenge for these communities to find internally consistent coherency going forward. I wish them the best of luck, as Debian and RHEL are important for the wider free software eco-system. Let's see how the community around Trisquel, Guix and the other FSDG-distributions evolve in the future. The situation for free software today appears better than it was years ago regardless of Debian and RHEL's setbacks though, which is important to remember! I don't recall being able to install a 100% free OS on a modern laptop and modern server as easily as I am able to do today. Happy Hacking!

Addendum 22 July 2023: The original title of this post was Coping with non-free Debian, and there was a thread about it that included feedback on the title. I do agree that my initial title was confrontational, and I've changed it to the more specific Coping with non-free software in Debian. I do appreciate all the fine free software that goes into Debian, and hope that this will continue and improve, although I have doubts given the opinions expressed by the majority of developers. For the philosophically inclined, it is interesting to think about what it means to say that a compilation of software is freely licensed. At what point does a compilation of software deserve the labels free vs non-free? Windows probably contains some software that is published as free software, so let's say Windows is 1% free. Apple authors a lot of free software (as a tangent, Apple probably produces more free software than what Debian as an organization produces), so let's say macOS contains 20% free software. Solaris (or some still maintained derivative like OpenIndiana) is mostly freely licensed these days, isn't it? Let's say it is 80% free. Ubuntu and RHEL push that closer to, let's say, 95% free software. Debian used to be 100% but is now slightly less at maybe 99%. Trisquel and Guix are at 100%. At what point is it reasonable to call a compilation free? Does Debian deserve to be called freely licensed? Does macOS? Is it even possible to use these labels for compilations in any meaningful way? All numbers are just taken from thin air. It isn't even clear how this can be measured (binary bytes? lines of code? CPU cycles? etc). The caveat about license review mistakes applies. I ignore Debian's own claims that Debian is 100% free software, which I believe are inconsistent and no longer true under any reasonable objective analysis. It was not true before the firmware vote since Debian ships with non-free blobs in the Linux kernel, for example.

10 July 2023

Lukas Märdian: Netplan and systemd-networkd on Debian Bookworm

Debian's cloud-images are using systemd-networkd as their default network stack in Bookworm, a slim and feature-rich networking daemon that comes included with systemd itself. Debian's cloud-images are deploying Netplan on top of this as an easy-to-use, declarative control layer. If you want to experiment with systemd-networkd and Netplan on Debian, this can be done easily in QEMU using the official images. To start, you need to download the relevant .qcow2 Debian cloud-image from: https://cloud.debian.org/images/cloud/bookworm/latest/
$ wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2

Prepare a cloud image

Next, you need to prepare some configuration files for cloud-init and Netplan, to build a data-source (seed.img) for your local cloud-image.
$ cat > meta.yaml <<EOF
instance-id: debian01
local-hostname: cloudimg
EOF
$ cat > user.yaml <<EOF
#cloud-config
ssh_pwauth: true
password: test
chpasswd:
  expire: false
EOF
$ cat > netplan.yaml <<EOF
network:
  version: 2
  ethernets:
    id0:
      match:
        macaddress: "ca:fe:ca:fe:00:aa"
      dhcp4: true
      dhcp6: true
      set-name: lan0
EOF
Once all configuration is prepared, you can create the local data-source image, using the cloud-localds tool from the cloud-image-utils package:
$ cloud-localds --network-config=netplan.yaml seed.img user.yaml meta.yaml

Launch the local VM

Now, everything is prepared to launch a QEMU VM with two NICs and do some experimentation! The following command will launch an ephemeral environment for you, keeping the original Debian cloud-image untouched. If you want to preserve any changes on disk, you can remove the trailing -snapshot parameter.
$ qemu-system-x86_64 \
  -machine accel=kvm,type=q35 \
  -cpu host \
  -m 2G \
  -device virtio-net-pci,netdev=net0,mac=ca:fe:ca:fe:00:aa \
  -netdev user,id=net0,hostfwd=tcp::2222-:22 \
  -nic user,model=virtio-net-pci,mac=f0:0d:ca:fe:00:bb \
  -drive if=virtio,format=qcow2,file=debian-12-generic-amd64.qcow2 \
  -drive if=virtio,format=raw,file=seed.img -snapshot
We set up the default debian user account through cloud-init's user-data configuration above, so you can now log in to the system, using that user with the (very unsafe!) password "test".
$ ssh -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null" -p 2222 debian@localhost # password: test

Experience Netplan and systemd-networkd

Once logged in successfully, you can execute the netplan status command to check the system's network configuration, as configured through cloud-init's netplan.yaml passthrough. So you've already used Netplan at this point implicitly and it did all the configuration of systemd-networkd for you in the background!
debian@cloudimg:~$ sudo netplan status -a
     Online state: online
    DNS Addresses: 10.0.2.3 (compat)
       DNS Search: .
   1: lo ethernet UNKNOWN/UP (unmanaged)
      MAC Address: 00:00:00:00:00:00
        Addresses: 127.0.0.1/8
                   ::1/128
           Routes: ::1 metric 256
   2: enp0s2 ethernet DOWN (unmanaged)
      MAC Address: f0:0d:ca:fe:00:bb (Red Hat, Inc.)
   3: lan0 ethernet UP (networkd: id0)
      MAC Address: ca:fe:ca:fe:00:aa (Red Hat, Inc.)
        Addresses: 10.0.2.15/24 (dhcp)
                   fec0::c8fe:caff:fefe:aa/64
                   fe80::c8fe:caff:fefe:aa/64 (link)
    DNS Addresses: 10.0.2.3
           Routes: default via 10.0.2.2 from 10.0.2.15 metric 100 (dhcp)
                   10.0.2.0/24 from 10.0.2.15 metric 100 (link)
                   10.0.2.2 from 10.0.2.15 metric 100 (dhcp, link)
                   10.0.2.3 from 10.0.2.15 metric 100 (dhcp, link)
                   fe80::/64 metric 256
                   fec0::/64 metric 100 (ra)
                   default via fe80::2 metric 100 (ra)
As you can see from this output, the lan0 interface is configured via the id0 Netplan ID to be managed by systemd-networkd. Compare this data to the netplan.yaml file above, the networkctl output, the local Netplan configuration in /etc/netplan/ and the auto-generated systemd-networkd configuration.
debian@cloudimg:~$ networkctl 
IDX LINK   TYPE     OPERATIONAL SETUP     
  1 lo     loopback carrier     unmanaged
  2 enp0s2 ether    off         unmanaged
  3 lan0   ether    routable    configured
3 links listed.
debian@cloudimg:~$ cat /etc/netplan/50-cloud-init.yaml 
# [...]
network:
    ethernets:
        id0:
            dhcp4: true
            dhcp6: true
            match:
                macaddress: ca:fe:ca:fe:00:aa
            set-name: lan0
    version: 2

debian@cloudimg:~$ ls -l /run/systemd/network/
total 8
-rw-r--r-- 1 root root  78 Jul  5 15:23 10-netplan-id0.link
-rw-r--r-- 1 root root 137 Jul  5 15:23 10-netplan-id0.network
Now you can go ahead and try something more advanced, like link aggregation, using the second NIC that you configured for this QEMU VM and explore all the possibilities of Netplan on Debian, by checking the Netplan YAML documentation.
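As a very rough starting point for the link aggregation case, a Netplan definition for the two NICs could look something like the snippet below (an untested sketch: the second NIC is matched by the MAC address from the QEMU command above, QEMU's user-mode networking will not do anything useful with a bond, and in a real setup the dhcp settings on id0 in 50-cloud-init.yaml would also need to be dropped). It could go into a new file such as /etc/netplan/60-bond.yaml, followed by sudo netplan apply:
network:
  version: 2
  ethernets:
    id0:
      match:
        macaddress: "ca:fe:ca:fe:00:aa"
    id1:
      match:
        macaddress: "f0:0d:ca:fe:00:bb"
  bonds:
    bond0:
      interfaces: [id0, id1]
      parameters:
        mode: active-backup
      dhcp4: true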

8 July 2023

Russell Coker: Sandboxing Phone Apps

As a follow-up to Wayland [1]: A difficult problem with Linux desktop systems (which includes phones and tablets) is restricting application access so that applications can't mess with each other's data or configuration but also allowing them to share data as needed. This has been mostly solved for Android but that involved giving up all legacy Linux apps. I think that we need to get phones capable of running a full desktop environment and having Android-level security on phone apps and regular desktop apps. My interest in this is phones running Debian and derivatives such as PureOS. But everything I describe in this post should work equally well for all full-featured Linux distributions for phones such as Arch, Gentoo, etc and phone based derivatives of those such as Manjaro. It may be slightly less applicable to distributions such as Alpine Linux and its phone derivative PostmarketOS; I would appreciate comments from contributors to PostmarketOS or Alpine Linux about this. I've investigated some of the ways of solving these problems. Many of the ways of doing this involve namespaces. The LWN articles about namespaces are a good background to some of these technologies [2]. The LCA keynote lecture Containers aka crazy user space fun by Jess Frazelle has a good summary of some of the technology [3]. One part that I found particularly interesting was the bit about recognising the container access needed at compile time. This can also be part of recognising bad application design at compile time; it's quite common for security systems to flag bad security design in programs.

Firejail

To sandbox applications you need to have some method of restricting what they do, this means some combination of namespaces and similar features. Here's an article on sandboxing with firejail [4]. Firejail uses namespaces, seccomp-bpf, and capabilities to restrict programs. It allows bind mounts if run as root, and if not run as root it can restrict file access by name and access to networking, system calls, and other features. It has a convenient learning mode that generates policy for you, so if you have a certain restricted set of tasks that an application is to perform you can run it once and then force it to do only the same operations in future. I recommend that everyone who is still reading at this point try out firejail. Here's an example of what you can do:
# create a profile
firejail --build=xterm.profile xterm
# now this run can only do what the previous run did
firejail --profile=xterm.profile xterm
Note that firejail is SETUID root so it can potentially reduce system security and it has had security issues in the past. In spite of that it can be good for allowing a trusted user to run programs with less access to the system. Also it is a good way to start learning about such things. I don't think it's a good solution for what I want to do. But I won't rule out the possibility of using it at some future time for special situations.

Bubblewrap

I tried out firejail with the browser Epiphany (Debian package epiphany-browser) on my Librem5, but that didn't work as Epiphany uses /usr/bin/bwrap (bubblewrap) for its internal sandboxing (here is an informative blog post about the history of bubblewrap AKA xdg-app-helper which was developed as part of flatpak [5]). The Epiphany bubblewrap sandbox is similar to the situation with Chrome/Chromium which have internal sandboxing that's incompatible with firejail. The firejail man page notes that it's not compatible with Snap, Flatpak, and similar technologies. One issue this raises is that we can't have a namespace based sandboxing system applied to all desktop apps with no extra configuration as some desktop apps won't work with it. Bubblewrap requires setting kernel.unprivileged_userns_clone=1 to run as non-root (i.e. provide the normal and expected functionality) which potentially reduces system security. Here is an example of a past kernel bug that was exploitable by creating a user namespace with CAP_SYS_ADMIN [6]. But it's the default in recent Debian kernels, which means that the issues have been reviewed and determined to be a reasonable trade-off and also means that many programs will use the feature and break if it's disabled. Here is an example of how to use Bubblewrap on Debian: after installing the bubblewrap package, run the following command. Note that the --new-session option (to prevent injecting characters in the keyboard buffer with TIOCSTI) makes the session mostly unusable for a shell.
bwrap --ro-bind /usr /usr --symlink usr/lib64 /lib64 --symlink usr/lib /lib --proc /proc --dev /dev --unshare-pid --die-with-parent bash
Here is an example of using Bubblewrap to sandbox the game Warzone2100 running with Wayland/Vulkan graphics and Pulseaudio sound.
bwrap --bind $HOME/.local/share/warzone2100 $HOME/.local/share/warzone2100 --bind /run/user/$UID/pulse /run/user/$UID/pulse --bind /run/user/$UID/wayland-0 /run/user/$UID/wayland-0 --bind /run/user/$UID/wayland-0.lock /run/user/$UID/wayland-0.lock --ro-bind /usr /usr --symlink usr/bin /bin --symlink usr/lib64 /lib64 --symlink usr/lib /lib --proc /proc --dev /dev --unshare-pid --dev-bind /dev/dri /dev/dri --ro-bind $HOME/.pulse $HOME/.pulse --ro-bind $XAUTHORITY $XAUTHORITY --ro-bind /sys /sys --new-session --die-with-parent warzone2100
Here is an example of using Bubblewrap to sandbox the Debian bug reporting tool reportbug
bwrap --bind /tmp /tmp --ro-bind /etc /etc --ro-bind /usr /usr --ro-bind /var/lib/dpkg /var/lib/dpkg --symlink usr/sbin /sbin --symlink usr/bin /bin --symlink usr/lib64 /lib64 --symlink usr/lib /lib --symlink /usr/lib32 /lib32 --symlink /usr/libx32 /libx32 --proc /proc --dev /dev --die-with-parent --unshare-ipc --unshare-pid reportbug
Here is an example shell script to wrap the build process for Debian packages (a usage sketch follows the script below). This needs to run with --unshare-user and specify the UID as 0 because fakeroot doesn't work in the container; I haven't worked out why, but doing it through the container is a better method anyway. This script shares read-write the parent of the current directory, as the Debian build process creates packages and metadata files in the parent directory. This will prevent the automatic signing scripts, which is a feature not a bug, so after building packages you have to sign the .changes file with debsign. One thing I just learned is that the Debian build system Sbuild can use chroots for building packages for similar benefits [7]. Some people believe that sbuild is the correct way of doing it regardless of the chroot issue. I think it's too heavy-weight for most of my Debian package building, but even if I had been convinced I'd still share the information about how to use bwrap as Debian is about giving users choice.
#!/bin/bash
set -e
BUILDDIR=$(realpath $(pwd)/..)
exec bwrap --bind /tmp /tmp --bind $BUILDDIR $BUILDDIR --ro-bind /etc /etc --ro-bind /usr /usr --ro-bind /var/lib/dpkg /var/lib/dpkg --symlink usr/bin /bin --symlink usr/lib64 /lib64 --symlink usr/lib /lib --proc /proc --dev /dev --die-with-parent --unshare-user --unshare-ipc --unshare-net --unshare-pid --new-session --uid 0 --gid 0 $@
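Assuming the script above is saved as bwrap-build (the name is arbitrary, and the package name below is just a placeholder), usage from inside an unpacked source tree could look roughly like this:
# build inside the container, then sign the results outside it
./bwrap-build dpkg-buildpackage -us -uc
debsign ../example_1.0-1_amd64.changes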
Here is an informative blog post about using Bubblewrap with Seccomp (BPF) [8]. In a future post I'll write about how to get this sort of thing going but I'll just leave the URL here for people who want to do it on their own. The source for the flatpak-run program is the only example I could find of using Seccomp with Bubblewrap [9]. A lot of that code is worth copying for application sandboxing, maybe the entire program.

Unshare

The unshare command from the util-linux package has a large portion of the Bubblewrap functionality. The things that it doesn't do, like creating a new session, can be done by other utilities. Here is an example of creating a container with unshare and then using cgroups with it [10].

systemd --user

Recent distributions have systemd support for running a user session, the Arch Linux Wiki has a good description of how this works [11]. The units for a user are .service files stored in /usr/lib/systemd/user/ (distribution provided), ~/.local/share/systemd/user/ (user installed applications, in Debian a link to ~/.config/systemd/user/), ~/.config/systemd/user/ (for manual user config), and /etc/systemd/user/ (local sysadmin provided). Here are some example commands for manipulating this:
# show units running for the current user
systemctl --user
# show status of one unit
systemctl --user status kmail.service
# add an environment variable to the list for all user units
systemctl --user import-environment XAUTHORITY
# start a user unit
systemctl --user start kmail.service
# analyse security for all units for the current user
systemd-analyze --user security
# analyse security for one unit
systemd-analyze --user security kmail.service
Here is a test kmail.service file I wrote to see what could be done for kmail. I don't think that kmail is the app most in need of being restricted (it is more in need of being protected from other apps), but it still makes a good test case. This service file took it from the default risk score of 9.8 (UNSAFE) to 6.3 (MEDIUM), even though I was getting the error code=exited, status=218/CAPABILITIES when I tried anything that used capabilities (apparently due to systemd having some issue talking to the kernel).
[Unit]
Description=kmail
[Service]
ExecStart=/usr/bin/kmail
# can not limit capabilities (code=exited, status=218/CAPABILITIES)
#CapabilityBoundingSet=~CAP_SYS_TIME CAP_SYS_PACCT CAP_KILL CAP_WAKE_ALARM CAP_DAC_OVERRIDE CAP_DAC_READ_SEARCH CAP_FOWNER CAP_IPC_OWNER CAP_LINUX_IMMUTABLE CAP_IPC_LOCK CAP_SYS_MODULE CAP_SYS_TTY_CONFIG CAP_SYS_BOOT CAP_SYS_CHROOT CAP_BLOCK_SUSPEND CAP_LEASE CAP_MKNOD CAP_CHOWN CAP_FSETID CAP_SETFCAP CAP_SETGID CAP_SETUID CAP_SETPCAP CAP_SYS_RAWIO CAP_SYS_PTRACE CAP_SYS_NICE CAP_SYS_RESOURCE CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SYS_ADMIN CAP_SYSLOG
# also 218 for ProtectKernelModules PrivateDevices ProtectKernelLogs ProtectClock
# MemoryDenyWriteExecute stops it displaying message content (bad)
# needs @resources and @mount to startup
# needs @privileged to display message content
SystemCallFilter=~@cpu-emulation @debug @raw-io @reboot @swap @obsolete
SystemCallArchitectures=native
UMask=077
NoNewPrivileges=true
ProtectControlGroups=true
PrivateMounts=false
RestrictNamespaces=~user pid net uts mnt cgroup ipc
RestrictSUIDSGID=true
ProtectHostname=true
LockPersonality=true
ProtectKernelTunables=true
RestrictAddressFamilies=~AF_PACKET
RestrictRealtime=true
ProtectSystem=strict
ProtectProc=invisible
PrivateUsers=true
[Install]
When I tried to use the TemporaryFileSystem=%h directive (to make the home directory a tmpfs, the most basic step in restricting what a regular user application can do) I got the error (code=exited, status=226/NAMESPACE). So I don't think the systemd user setup competes with bubblewrap for restricting user processes. But if anyone else can start where I left off and go further, that will be interesting.

Systemd-run

The following shell script runs firefox as a dynamic user via systemd-run. Running this asks for the root password, and any mechanism for allowing that sort of thing opens potential security holes. So at this time, while it's an interesting feature, I don't think it is suitable for running regular applications on a phone or Linux desktop.
#!/bin/bash
# systemd-run Firefox with DynamicUser and home directory.
#
# Run as a non-root user.
# Or, run as root and change $USER below.
SANDBOX_MINIMAL=(
    --property=DynamicUser=1
    --property=StateDirectory=openstreetmap
    # --property=RootDirectory=/debian_sid
)
SANDBOX_X11=(
    # Sharing Xorg always defeats security, regardless of any sandboxing tech,
    # but the config is almost ready for Wayland, and there's Xephyr.
#    --property=LoadCredential=.ICEauthority:/home/$USER/.ICEauthority
    --property=LoadCredential=.Xauthority:/home/$USER/.Xauthority
    --property=Environment=DISPLAY=:0
)
SANDBOX_FIREFOX=(
    # hardware-accelerated rendering
    --property=BindPaths=/dev/dri
    # webcam
    # --property=SupplementaryGroups=video
)
systemd-run  \
    "$ SANDBOX_MINIMAL[@] "  "$ SANDBOX_X11[@] " "$ SANDBOX_FIREFOX[@] " \
    bash -c '
        export XAUTHORITY="$CREDENTIALS_DIRECTORY"/.Xauthority
        export ICEAUTHORITY="$CREDENTIALS_DIRECTORY"/.ICEauthority
        export HOME="$STATE_DIRECTORY"/home
        firefox --no-remote about:blank
    '
Qubes OS

Here is an interesting demo video of QubesOS [12] which shows how it uses multiple VMs to separate different uses. Here is an informative LCA presentation about Qubes which shows how it asks the user about communication between VMs and access to hardware [13]. I recommend that everyone who hasn't seen Qubes in operation watch the first video, and everyone who isn't familiar with the computer science behind it watch the second video. Qubes appears to be a free software equivalent to NetTop as far as I can tell without ever being able to use NetTop. I don't think Qubes is a good match for my needs in general use and it definitely isn't a good option for phones (VMs use excessive CPU on phones). But its methods for controlling access have some ideas that are worth copying.

File Access XDG Desktop Portal

One core issue for running sandboxed applications is to allow them to access files permitted by the user but no other files. There are two main parts to this problem; the easier one is to have each application have its own private configuration directory, which can be addressed by bind mounts, MAC systems, running each application under a different UID or GID, and many other ways. The hard part of file access is to allow the application to access random files that the user wishes. For example I want my email program, IM program, and web browser to be able to save files and also to be able to send arbitrary files via email, IM, and upload to web sites. But I don't want one of those programs to be able to access all the files from the others if it's compromised. So only giving programs access to arbitrary files when the user chooses such a file makes sense. There is a package xdg-desktop-portal which provides a dbus interface for opening files etc for a sandboxed application [14]. This portal has backends for KDE, GNOME, and Wayland among others, which allow the user to choose which file or files the application may access. Chrome/Chromium is one well-known program that uses the xdg-desktop-portal and does its own sandboxing. To use xdg-desktop-portal an application must be modified to use that interface instead of opening files directly, so getting this going with all Internet-facing applications will take some work. But the documentation notes that the portal API gives a consistent user interface for operations such as opening files so it can provide benefits even without a sandboxed environment. This technology was developed for Flatpak and is now also used for Snap. It also has a range of APIs for accessing other services [15].

Flatpak

Flatpak is a system for distributing containerised versions of applications with some effort made to improve security. Their development of bubblewrap and xdg-desktop-portal is really good work. However the idea of having software packaged with all the libraries it needs isn't a good one; here's a blog post covering some of the issues [16]. The web site flatkill.org has been registered to complain about some Flatpak problems [17]. They have some good points about the approach that Flatpak project developers have taken towards some issues. They also make some points about the people who package software not keeping up to date with security fixes and not determining a good security policy for their pak. But this doesn't preclude usefully using parts of the technology for real security benefits.
If parts of Flatpak like Bubblewrap and xdg-portal are used with good security policies on programs that are well packaged for a distribution then these issues would be solved. The Flatpak app author's documentation about package requirements [18] has an overview of security features that is quite reasonable. If most paks follow that then it probably isn't too bad. I reviewed the manifests of a few of the recent paks and they seemed to have good settings. In the amount of time I was prepared to spend investigating this I couldn't find evidence to support the Flatkill claim about Flatpaks granting obviously inappropriate permissions. But the fact that the people who run Flathub decided to put a graph of installs over time on the main page for each pak, while making the security settings only available by clicking the Manifest github link, clicking on a JSON or YAML file, and then searching for the right section in that, shows where their priorities lie. The finish-args section of the Flatpak manifest (the section that describes the access to the system) seems reasonably capable and not difficult for users to specify, as well as being in common use. It seems like it will be easy enough to take some code from Flatpak for converting the finish-args into Bubblewrap parameters and then use the manifest files from Flathub as a starting point for writing application security policy for Debian packages.

Snap

Snap is developed by Canonical and seems like their version of Flatpak with some Docker features for managing versions; the Getting Started document is worth reading [19]. They have Connections between different snaps and the system where a snap registers a plug that connects to a socket which can be exposed by the system (EG the camera socket) or another snap. The local admin can connect and disconnect them. The connections design reminds me of the Android security model with permitting access to various devices. The KDE Neon extension [20] has been written to add Snap support to KDE. Snap seems quite usable if you have an ecosystem of programs using it, which Canonical has developed. But it has all the overheads of loopback mounts etc that you don't want on a mobile device, and has the security issues of libraries included in snaps not being updated. A quick inspection of an Ubuntu 22.04 system I run (latest stable release) has Firefox 114.0.2-1 installed which includes libgcrypt.so.20.2.5, which is apparently libgcrypt 1.8.5, and there are CVEs relating to libgcrypt versions before 1.9.4 and 1.8.x versions before 1.8.8 which were published in 2021 and updated in 2022. Further investigation showed that libgcrypt came from the gnome-3-38-2004 snap (good that it doesn't require all shared objects to be in the same snap, but bad that it has old versions in dependencies). The gnome-3-38-2004 snap is the latest version so anyone using the Snap of Firefox seems to have no choice but to have what appears to be insecure libraries. The strict mode means that the Snap in question has no system access other than through interfaces [21].

SE Linux and Apparmor

The Librem5 has Apparmor running by default. I looked into writing Apparmor policy to prevent Epiphany from accessing all files under the home directory, but that would be a lot of work. Also at least one person has given up on maintaining an Epiphany profile for Apparmor because it changes often and its sandbox doesn't work well with Apparmor [22]. This was not a surprise to me at all; SE Linux policy has the same issues as Apparmor in this regard.
The Ubuntu Security Team Application Confinement document [23] is worth reading. They have some good plans for using AppArmor as part of solving some of these problems. I plan to use SE Linux for that.

Slightly Related Things

One thing for the future is some sort of secure boot technology; the LCA lecture Becoming a tyrant: Implementing secure boot in embedded devices [24] has some ideas for the future. The Betrusted project seems really interesting, see Bunnie's lecture about how to create a phone-sized security device with a custom OS [25]. The Betrusted project web page is worth reading too [26]. It would be ironic to have a phone as your main PC that is the same size as your security device, but that seems to be the logical solution to several computing problems. Whonix is a Linux distribution that has one VM for doing Tor stuff and another VM for all other programs which is only allowed to have network access via the Tor VM [27]. Xpra does for X programs what screen/tmux do for text-mode programs [28]. It allows isolating X programs from each other in ways that are difficult or impossible with a regular X session. In an ideal situation we could probably get the benefits we need with just using Wayland, but if there are legacy apps that only have X support this could help.

Conclusion

I think that currently the best option for confining desktop apps is Bubblewrap on Wayland, maybe with a modified version of Flatpak-run to run it and with app modifications to use the xdg-portal interfaces as much as possible. That should be efficient enough in storage space, storage IO performance, memory use, and CPU use to run on phones while giving some significant benefits. Things to investigate are how much code from Flatpak to use, how to most efficiently do the configuration (maybe the Flatpak way because it's known and seems effective), how to test this (and have regression tests), and what default settings to use. Also BPF is a big thing to investigate.
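As a purely hypothetical illustration of the configuration idea above (this is not how Flatpak actually implements it, and the application name below is a placeholder), a converter from a manifest's finish-args to Bubblewrap options might map a few common entries roughly like this:
# --share=network           -> do not pass --unshare-net
# --socket=wayland          -> bind only the compositor socket
# --socket=pulseaudio       -> bind only the PulseAudio socket
# --filesystem=xdg-download -> bind only the download directory
bwrap --ro-bind /usr /usr --symlink usr/bin /bin --symlink usr/lib64 /lib64 --symlink usr/lib /lib \
  --proc /proc --dev /dev --unshare-pid --unshare-ipc --die-with-parent --new-session \
  --bind "$XDG_RUNTIME_DIR/wayland-0" "$XDG_RUNTIME_DIR/wayland-0" \
  --bind "$XDG_RUNTIME_DIR/pulse" "$XDG_RUNTIME_DIR/pulse" \
  --bind "$HOME/Downloads" "$HOME/Downloads" \
  some-application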

7 July 2023

Dirk Eddelbuettel: Rcpp 1.0.11 on CRAN: Updates and Maintenance

rcpp logo The Rcpp Core Team is delighted to announce that the newest release 1.0.11 of the Rcpp package arrived on CRAN and in Debian earlier today. Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions and of course at r2u. The release was finalized three days ago, but given the widespread use and extended reverse dependencies at CRAN it usually takes a few days to be processed. This release continues with the six-month January-July cycle started with release 1.0.5 in July 2020. As a reminder, we do of course make interim snapshot dev or rc releases available via the Rcpp drat repo and strongly encourage their use and testing; I run my systems with these versions, which tend to work just as well and are also fully tested against all reverse-dependencies. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 2720 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 251 in BioConductor. On CRAN, 13.7% of all packages depend (directly) on Rcpp, and 59.6% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 72.5 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 1678 (JSS, 2011) and 259 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 588. This release is incremental as usual, generally preserving existing capabilities faithfully while smoothing out corners and / or extending slightly, sometimes in response to changing and tightened demands from CRAN or R standards. The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp version 1.0.11 (2023-07-03)
  • Changes in Rcpp API:
    • Rcpp:::CxxFlags() now quotes only non-standard include path on linux (Lukasz in #1243 closing #1242).
    • Two unit tests no longer accidentally bark on stdout (Dirk and Iñaki in #1245).
    • Compilation under C++20 using clang++ and its standard library is enabled (Dirk in #1248 closing #1244).
    • Use backticks in a generated .Call() statement in RcppExports.R (Dirk #1256 closing #1255).
    • Switch to system2() to capture standard error messages in error cases (Iñaki in #1259 and #1261 fixing #1257).
  • Changes in Rcpp Documentation:
    • The CITATION file format has been updated (Dirk in #1250 fixing #1249).
  • Changes in Rcpp Deployment:
    • A test for qnorm now uses the more accurate value from R 4.3.0 (Dirk in #1252 and #1260 fixing #1251).
    • Skip tests with path issues on Windows (Iñaki in #1258).
    • Container deployment in continuous integrations was improved. (Iñaki and Dirk in #1264, Dirk in #1269).
    • Several files received minor edits to please R CMD check from r-devel (Dirk in #1267).

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow which also allows searching among the (currently) 2994 previous questions. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

1 July 2023

Debian Brasil: MiniDebConf Brasília 2023 - a brief report

Minidebconf2023 palco From May 25th to 27th, Brasília hosted the MiniDebConf 2023. This gathering, composed of various activities such as talks, workshops, sprints, BSP (Bug Squashing Party), key signing, social events, and hacking, aimed to bring the community together and celebrate the world's largest Free Software project: Debian. The MiniDebConf Brasília 2023 was a success thanks to the participation of everyone, regardless of their level of knowledge about Debian. We valued the presence of both beginners who are getting familiar with the system and official project developers. The spirit of inclusion and collaboration was present throughout the event. MiniDebConfs are local meetings organized by members of the Debian Project, aiming to achieve similar goals as DebConf but on a regional scale. Throughout the year, events like this occur in different parts of the world, strengthening the Debian community. Minidebconf2023 placa Activities The MiniDebConf program was intense and diverse. On May 25th and 26th (Thursday and Friday), we had talks, discussions, workshops, and many hands-on activities. On the 27th (Saturday), the Hacking Day took place, which was a special moment for Debian contributors to come together and work collaboratively on various aspects of the project. This was the Brazilian version of DebCamp, a tradition preceding DebConf. On this day, we prioritized practical activities such as software packaging, translations, key signing, install fest, and the Bug Squashing Party. Minidebconf2023 auditorio

Minidebconf2023 oficina Edition numbers The event numbers are impressive and demonstrate the community's involvement with Debian. We had 236 registered participants, 20 submitted talks, 14 volunteers, and 125 check-ins. Furthermore, in the hands-on activities, we achieved significant results, including 7 new installations of Debian GNU/Linux, the update of 18 packages in the official Debian project repository by participants, and the inclusion of 7 new contributors to the translation team. We also highlight the remote participation of the community through live streams. The analytics data reveals that our website received a total of 7,058 views, with 2,079 views on the homepage (which featured our sponsors' logos), 3,042 views on the program page, and 104 views on the sponsors' page. We recorded 922 unique users during the event. On YouTube, the live stream reached 311 views, with 56 likes and a peak of 20 concurrent views. There were an incredible 85.1 hours of watch time, and our channel gained 30 new subscribers. All this engagement and interest from the community further strengthens MiniDebConf. Minidebconf2023 palestrantes Photos and videos To relive the best moments of the event, we have photos and recordings available. Photos can be accessed at: https://deb.li/pbsb2023. Video recordings of the talks are available at the following link: https://deb.li/vbsb2023. To stay updated and connect with the Debian Brasília community, follow us on our social media channels: Thanks We would like to express our deep gratitude to all the participants, organizers, sponsors, and supporters who contributed to the success of MiniDebConf Brasília 2023. In particular, we extend our thanks to our Gold sponsors: Pencillabs, Globo, Policorp, and Toradex Brasil, and our Silver sponsor, 4-Linux. We also thank Finatec and the Instituto para Conservação de Tecnologias Livres (ICTL) for their support. Minidebconf2023 coffee MiniDebConf Brasília 2023 was a milestone for the Debian community, demonstrating the power of collaboration and Free Software. We hope that everyone enjoyed this enriching gathering and will continue to actively participate in future Debian Project initiatives. Together, we can make a difference! Minidebconf2023 fotos oficial

30 June 2023

Russell Coker: Links June 2023

Tablet Magazine has an interesting article about Jewish men who fought in the military for Nazi Germany [1]. I'm surprised that they didn't frag their colleagues. Dropbox has an insightful interview with a lawyer about the future of machine learning in the legal profession [2]. This seems like it could give real benefits to society in giving legal assistance to more people and giving less uncertainty about the result of court cases. It could also find unclear laws for legislators who want to improve things. Some people have started a software project to produce a free software version of Victoria 2 [3]. Hopefully OpenVic will become as successful as FreeCiv and FreeCraft! Hackster has an interesting article about work to create a machine that does a realistic impersonation of someone's handwriting [4]. The aim is to be good enough to fool people who want manually written assignments. Ars Technica has an interesting article about a side channel attack using the power LEDs of smart-card readers to extract cryptographic secret key data [5]. As usual for articles about side channels it turns out to be really hard to do; their proof of concept involved recording a card being repeatedly scanned for an hour. This doesn't mean it's a non-issue, they should harden readers against this. Vice has an interesting article on the search for chemical remnants of ancient organisms in 1.6 billion year old fossils [6]. Bleeping Computer has an interesting article about pirate Windows 10 ISOs infecting systems with EFI malware [7]. That's a particularly nasty attack and shows yet another down-side to commercial software. For Linux the ISOs are always clean and the systems aren't contaminated. The Register has an interesting article about a robot being used for chilled RAM attacks to get access to boot time secrets [8]. They monitor EMF output to stop it at the same time in each boot, which I consider the most noteworthy part of this attack. The BBC has an interesting article about personalised medicine [9]. There are 400 million people in the world with rare diseases and an estimated 60 million of them will die before the age of 5. Personalised medicine can save many lives. Let's hope it is used outside the first world. Knuth's thoughts about ChatGPT are interesting [10]. Interesting article about Brown M&Ms and assessing the likely quality of work from a devops team [11]. The ABC has an interesting article about the use of AI and robot traps to catch feral cats [12].

16 June 2023

John Goerzen: Using git-annex for Data Archiving

In my recent post about data archiving to removable media, I laid out the difference between backing up and archiving, and also said I'd evaluate git-annex and dar. This post evaluates git-annex. The next will look at dar, and then I'll make a comparison post. What is git-annex? git-annex is a fantastic and versatile program that does... well, it's one of those things that can do so much that it's a bit hard to describe. Its homepage says:
git-annex allows managing large files with git, without storing the file contents in git. It can sync, backup, and archive your data, offline and online. Checksums and encryption keep your data safe and secure. Bring the power and distributed nature of git to bear on your large files with git-annex.
I think the particularly interesting features of git-annex aren't actually included in that list. Among the features of git-annex that make it shine for this purpose, its location tracking is key. git-annex can know exactly which device has which file at which version at all times. Combined with its preferred content settings, this lets you very easily express rules about where copies should live. For instance, git-annex can be set to allow a configurable amount of free space to remain on a device, and it will fill it up with whatever copies are necessary up until it hits that limit. Very convenient! git-annex will store files in a folder structure that mirrors the origin folder structure, in plain files just as they were. This maximizes the ability for a future person to access the content, since it is all viewable without any special tool at all. Of course, for things like optical media, git-annex will essentially be creating what amounts to incrementals. To obtain a consistent copy of the original tree, you would still need to use git-annex to process (export) the archives. git-annex challenges In my prior post, I related some challenges with git-annex. The biggest of them (quite poor performance of the directory special remote when dealing with many files) has been resolved by Joey, git-annex's author! That dramatically improves the git-annex use scenario here! The fixing commit is in the source tree but not yet in a release. git-annex no doubt may still have performance challenges with repositories in the 100,000+ file range, but in that order of magnitude it now looks usable. I'm not sure about 1,000,000-file repositories (I haven't tested); there is a page about scalability. A few other more minor challenges remain, the timestamp handling among them. I worked around the timestamp issue by using the mtree-netbsd package in Debian. mtree writes out a summary of files and metadata in a tree, and can restore them. To save:
mtree -c -R nlink,uid,gid,mode -p /PATH/TO/REPO -X <(echo './.git') > /tmp/spec
And, after restoration, the timestamps can be applied with:
mtree -t -U -e < /tmp/spec
Walkthrough: initial setup To use git-annex in this way, we have to do some setup. My general approach is a small metadata-only "hub" repository that knows about the source data (via a directory special remote) but stores none of it, plus one clone per removable drive that holds the actual content. Let's get started! I've set all these shell variables appropriately for this example, and REPONAME to "testdata". We'll begin by setting up the metadata-only tracking repo.
$ REPONAME=testdata
$ mkdir "$METAREPO"
$ cd "$METAREPO"
$ git init
$ git config annex.thin true
There is a sort of complicated topic of how git-annex stores files in a repo, which varies depending on whether the data for the file is present in a given repo, and whether the file is locked or unlocked. Basically, the options I use here cause git-annex to mostly use hard links instead of symlinks or pointer files, for maximum compatibility with non-POSIX filesystems such as NTFS and UDF, which might be used on these devices. thin is part of that. Let's continue:
$ git annex init 'local hub'
init local hub ok
(recording state in git...)
$ git annex wanted . "include=* and exclude=$REPONAME/*"
wanted . ok
(recording state in git...)
In a bit, we are going to import the source data under the directory named $REPONAME (here, testdata). The wanted command says: in this repository (represented by the bare dot), the files we want are matched by the rule that says everything except what's under $REPONAME. In other words, we don't want to make an unnecessary copy here. Because I expect to use an mtree file as documented above, and it is not under $REPONAME/, it will be included. Let's just add it and tweak some things.
$ touch mtree
$ git annex add mtree
add mtree
ok
(recording state in git...)
$ git annex sync
git-annex sync will change default behavior to operate on --content in a future version of git-annex. Recommend you explicitly use --no-content (or -g) to prepare for that change. (Or you can configure annex.synccontent)
commit
[main (root-commit) 6044742] git-annex in local hub
1 file changed, 1 insertion(+)
create mode 120000 mtree
ok
$ ls -l
total 9
lrwxrwxrwx 1 jgoerzen jgoerzen 178 Jun 15 22:31 mtree -> .git/annex/objects/pX/ZJ/...
OK! We've added a file, and it got transformed into a symlink. That's the thing I said we were going to avoid, so:
git annex adjust --unlock-present
adjust
Switched to branch 'adjusted/main(unlockpresent)'
ok
$ ls -l
total 1
-rw-r--r-- 2 jgoerzen jgoerzen 0 Jun 15 22:31 mtree
You'll notice it transformed into a hard link (nlinks=2) file. Great! Now let's import the source data. For that, we'll use the directory special remote.
$ git annex initremote source type=directory directory=$SOURCEDIR importtree=yes \
encryption=none
initremote source ok
(recording state in git...)
$ git annex enableremote source directory=$SOURCEDIR
enableremote source ok
(recording state in git...)
$ git config remote.source.annex-readonly true
$ git config annex.securehashesonly true
$ git config annex.genmetadata true
$ git config annex.diskreserve 100M
$ git config remote.source.annex-tracking-branch main:$REPONAME
OK, so here we created a new remote named "source". We enabled it, and set some configuration. Most notably, that last line causes files from "source" to be imported under $REPONAME/ as we wanted earlier. Now we're ready to scan the source.
$ git annex sync
At this point, you'll see git-annex computing a hash for every file in the source directory. I can verify with du that my metadata-only repo only uses 14MB of disk space, while my source is around 4GB. Now we can see what git-annex thinks about file locations:
$ git-annex whereis | less
whereis mtree (1 copy)
8aed01c5-da30-46c0-8357-1e8a94f67ed6 -- local hub [here]
ok
whereis testdata/[redacted] (0 copies)
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
failed
... many more lines ...
So remember we said we wanted mtree, but nothing under testdata, under this repo? That's exactly what we got. git-annex knows that the files under testdata can be found under the "source" special remote, but aren't in any git-annex repo -- yet. Now we'll start adding them. Walkthrough: removable drives I've set up two 500MB filesystems to represent removable drives. We'll see how git-annex works with them.
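The df output below suggests the author used ZFS datasets for these test drives; as a purely illustrative alternative, a similar 500MB test filesystem can be made from a file-backed loop device (the image path is a placeholder), and likewise for $DRIVE02:
$ truncate -s 500M /tmp/drive01.img
$ mkfs.ext4 -q -F /tmp/drive01.img
$ sudo mount -o loop /tmp/drive01.img "$DRIVE01"
$ sudo chown "$USER" "$DRIVE01"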
$ cd $DRIVE01
$ df -h .
Filesystem Size Used Avail Use% Mounted on
acrypt/no-backup/annexdrive01 500M 1.0M 499M 1% /acrypt/no-backup/annexdrive01
$ git clone $METAREPO
Cloning into 'testdata'...
done.
$ cd $REPONAME
$ git config annex.thin true
$ git annex init "test drive #1"
$ git annex adjust --hide-missing --unlock
adjust
Switched to branch 'adjusted/main(hidemissing-unlocked)'
ok
$ git annex sync
OK, that's the initial setup. Now let's enable the source remote and configure it the same way we did before:
$ git annex enableremote source directory=$SOURCEDIR
enableremote source ok
(recording state in git...)
$ git config remote.source.annex-readonly true
$ git config remote.source.annex-tracking-branch main:$REPONAME
$ git config annex.securehashesonly true
$ git config annex.genmetadata true
$ git config annex.diskreserve 100M
Now, we'll add the drive to a group called "driveset01" and configure what we want on it:
$ git annex group . driveset01
$ git annex wanted . '(not copies=driveset01:1)'
What this does is say: first of all, this drive is in a group named driveset01. Then, this drive wants any files for which there isn't already at least one copy in driveset01. Now let's load up some files!
$ git annex sync --content
As the messages fly by from here, you'll see it mentioning that it got mtree, and then various files from "source" -- until, that is, the filesystem had less than 100MB free, at which point it complained of no space for the rest. Exactly like we wanted! Now, we need to teach $METAREPO about $DRIVE01.
$ cd $METAREPO
$ git remote add drive01 $DRIVE01/$REPONAME
$ git annex sync drive01
git-annex sync will change default behavior to operate on --content in a future version of git-annex. Recommend you explicitly use --no-content (or -g) to prepare for that change. (Or you can configure annex.synccontent)
commit
On branch adjusted/main(unlockpresent)
nothing to commit, working tree clean
ok
merge synced/main (Merging into main...)
Updating d1d9e53..817befc
Fast-forward
(Merging into adjusted branch...)
Updating 7ccc20b..861aa60
Fast-forward
ok
pull drive01
remote: Enumerating objects: 214, done.
remote: Counting objects: 100% (214/214), done.
remote: Compressing objects: 100% (95/95), done.
remote: Total 110 (delta 6), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (110/110), 13.01 KiB 1.44 MiB/s, done.
Resolving deltas: 100% (6/6), completed with 6 local objects.
From /acrypt/no-backup/annexdrive01/testdata
* [new branch] adjusted/main(hidemissing-unlocked) -> drive01/adjusted/main(hidemissing-unlocked)
* [new branch] adjusted/main(unlockpresent) -> drive01/adjusted/main(unlockpresent)
* [new branch] git-annex -> drive01/git-annex
* [new branch] main -> drive01/main
* [new branch] synced/main -> drive01/synced/main
ok
OK! This step is important, because drive01 and drive02 (which we'll set up shortly) won't necessarily be able to reach each other directly, due to not being plugged in simultaneously. Our $METAREPO, however, will know all about where every file is, so that the "wanted" settings can be correctly resolved. Let's see what things look like now:
$ git annex whereis | less
whereis mtree (2 copies)
8aed01c5-da30-46c0-8357-1e8a94f67ed6 -- local hub [here]
b46fc85c-c68e-4093-a66e-19dc99a7d5e7 -- test drive #1 [drive01]
ok
whereis testdata/[redacted] (1 copy)
b46fc85c-c68e-4093-a66e-19dc99a7d5e7 -- test drive #1 [drive01]
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
ok
If I scroll down a bit, I'll see the files past the 400MB mark that didn't make it onto drive01. Let's add another example drive! Walkthrough: Adding a second drive The steps for $DRIVE02 are the same as we did before, just with drive02 instead of drive01, so I'll omit listing it all a second time (a sketch of those commands appears just after the excerpt below). Now look at this excerpt from whereis:
whereis testdata/[redacted] (1 copy)
b46fc85c-c68e-4093-a66e-19dc99a7d5e7 -- test drive #1 [drive01]
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
ok
whereis testdata/[redacted] (1 copy)
c4540343-e3b5-4148-af46-3f612adda506 -- test drive #2 [drive02]
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
ok
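For completeness, the omitted drive02 setup would presumably mirror the drive01 walkthrough above, roughly as follows (assuming drive02 joins the same driveset01 group, which is what lets the 'not copies=driveset01:1' expression spread files across the set):
$ cd $DRIVE02
$ git clone $METAREPO
$ cd $REPONAME
$ git config annex.thin true
$ git annex init "test drive #2"
$ git annex adjust --hide-missing --unlock
$ git annex sync
$ git annex enableremote source directory=$SOURCEDIR
$ git config remote.source.annex-readonly true
$ git config remote.source.annex-tracking-branch main:$REPONAME
$ git config annex.securehashesonly true
$ git config annex.genmetadata true
$ git config annex.diskreserve 100M
$ git annex group . driveset01
$ git annex wanted . '(not copies=driveset01:1)'
$ git annex sync --content
# and then back in the hub repo, so it learns what landed on drive02:
$ cd $METAREPO
$ git remote add drive02 $DRIVE02/$REPONAME
$ git annex sync drive02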
Looking back at that whereis excerpt: some files on drive01, some on drive02, some neither place. Perfect! Walkthrough: Updates So I've made some changes in the source directory: moved a file, added another, and deleted one. All of these were copied to drive01 above. How do we handle this? First, we update the metadata repo:
$ cd $METAREPO
$ git annex sync
$ git annex dropunused all
OK, this has scanned $SOURCEDIR and noted changes. Let's see what whereis says:
$ git annex whereis | less
...
whereis testdata/cp (0 copies)
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
failed
whereis testdata/file01-unchanged (1 copy)
b46fc85c-c68e-4093-a66e-19dc99a7d5e7 -- test drive #1 [drive01]
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
ok
So this looks right. The file I added was a copy of /bin/cp. I moved another file to one named file01-unchanged. Notice that it realized this was a rename and that the data still exists on drive01. Well, let's update drive01.
$ cd $DRIVE01/$REPONAME
$ git annex sync --content
Looking at the testdata/ directory now, I see that file01-unchanged has been renamed, the deleted file is gone, but cp isn't yet here -- probably due to space issues; as it's new, it's undefined whether it or some other file would fill up free space. Let's work along a few more commands.
$ git annex get --auto
$ git annex drop --auto
$ git annex dropunused all
And now, let's make sure metarepo is updated with its state.
$ cd $METAREPO
$ git annex sync
We could do the same for drive02. This is how we would proceed with every update. Walkthrough: Restoration Now, we have bare files at reasonable locations in drive01 and drive02. But, to generate a consistent restore, we need to be able to actually do an export. Otherwise, we may have files with old names, duplicate files, etc. Let's assume that we lost our source and metadata repos and have to restore from scratch. We'll make a new $RESTOREDIR. We'll begin with drive01 since we used it most recently.
$ mv $METAREPO $METAREPO.disabled
$ mv $SOURCEDIR $SOURCEDIR.disabled
$ git clone $DRIVE01/$REPONAME $RESTOREDIR
$ cd $RESTOREDIR
$ git config annex.thin true
$ git annex init "restore"
$ git annex adjust --hide-missing --unlock
Now, we need to connect the drive01 and pull the files from it.
$ git remote add drive01 $DRIVE01/$REPONAME
$ git annex sync --content
Now, repeat with drive02:
$ git remote add drive02 $DRIVE02/$REPONAME
$ git annex sync --content
Now we've got all our content back! Here's what whereis looks like:
whereis testdata/file01-unchanged (3 copies)
3d663d0f-1a69-4943-8eb1-f4fe22dc4349 -- restore [here]
9e48387e-b096-400a-8555-a3caf5b70a64 -- source
b46fc85c-c68e-4093-a66e-19dc99a7d5e7 -- test drive #1 [origin]
ok
...
I was a little surprised that drive01 didn't seem to know what was on drive02. Perhaps that could have been remedied by adding more remotes there? I'm not entirely sure; I'd thought it would have been able to do that automatically. Conclusions I think I have demonstrated two things: First, git-annex is indeed an extremely powerful tool. I have only scratched the surface here. The location tracking is a neat feature, and being able to just access the data as plain files if all else fails is nice for future users. Secondly, it is also a complex tool and difficult to get right for this purpose (I think much easier for some other purposes). For someone that doesn't live and breathe git-annex, it can be hard to get right. In fact, I'm not entirely sure I got it right here. Why didn't drive02 know what files were on drive01 and vice-versa? I don't know, and that reflects some kind of misunderstanding on my part about how metadata is synced; perhaps the restore needs more care, or a different order, than I proposed. I initially tried to do a restore by using git annex export to a directory special remote with exporttree=yes, but I couldn't ever get it to actually do anything, and I don't know why. These two cut against each other. On the one hand, the raw accessibility of the data to someone with no computer skills is unmatched. On the other hand, I'm not certain I have the skill to always prepare the discs properly, or to do a proper consistent restore.
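As a footnote to that export issue, the attempted export-based restore would look roughly like this (the remote name exportdir and the target path are placeholders, and as noted above it did not behave as expected for the author):
$ cd $RESTOREDIR
$ git annex initremote exportdir type=directory directory=/PATH/TO/EXPORT exporttree=yes encryption=none
$ git annex export main --to exportdir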

31 May 2023

Russell Coker: Links May 2023

Petter Reinholdtsen wrote an interesting blog post about their work on packaging speech to text for Debian [1]. The work of the Debian Deep Learning Team seems really interesting and I look forward to playing with this sort of thing after the release of Bookworm (the packages in question will NOT go in Bookworm but I'll run at least one system on Testing after Bookworm). It would be nice to get more information on the hardware used for running such programs, the minimum hardware needed for real-time speech to text would be interesting to know. Brian Krebs wrote an informative article about attacks involving supply chain compromise and fake LinkedIn profiles [2]. The attacks targeted Linux as well as Windows. Interesting video about the Illium cameras, a bit harsh though, they criticise Illium devices for being too low resolution, too expensive, and taking too much CPU time to process [3]. The Illium cameras still sell for decent prices on eBay, I wonder if it's because of curious people like me who would like to play with them and have money to spare or whether some other interesting things are being done. I wonder how a 4*4 array of the rectangular cameras secured together with duct tape would go. The ideas of Illium should work better if implemented for multi-core CPUs or GPUs. Bruce Schneier with Henry Farrell and Nathan Sanders wrote an insightful blog post about how AI Chatbots could improve democracy [4]. Wired has an interesting article about the way DJI drones transmit the location of the drone operator without encryption by design [5]. Apparently this has been used for targeting attacks on drone operators in Ukraine. This video about robot mice navigating mazes is interesting [6]. But I think it became less interesting when they got to the stage of milliseconds counting for the win, it's very optimised for one case just like F1. I think it would be interesting if they had a rally contest where they go across grass or sand, 3D mazes both in air and water, and contests where Tungsten weights have to be transported. They should push some of the other limits of engineering as completing a maze quickly has been solved. The Guardian has an interesting article about a blood test for sleepy driving [7]. Once they have an objective test they can punish people for it. This github repository listing public APIs is interesting [8]. Lots of fun ideas for phone apps there. Simon Josefsson wrote an insightful blog post about the threat model of security devices [9]. Unfortunately the security of most people is way below the level where this is an issue. But it's good to think about future steps needed for good security. Cory Doctorow wrote an interesting article The Swivel Eyed Loons have a Point [10] about the fact that some of the nuttiest people are protesting about real issues, just in the wrong way.

29 May 2023

Russell Coker: Considering Convergence

What is Convergence In 2013 Kyle Rankin (at the time Linux Journal columnist and CSO of Purism) wrote a Linux Journal article about Linux convergence [1] (which means using a phone and a dock to replace a desktop) featuring the Nokia N900 smart phone and a chroot environment on the Motorola Droid 4 Android phone. Both of them have very limited hardware even by the standards of the day and neither was a system I'd consider using all the time. None of the Android phones I used at that time were at all comparable to any sort of desktop system I'd want to use. Hardware for Convergence Comparing a Phone to a Laptop The first hardware issue for convergence is docks and other accessories to attach a small computer to hardware designed for larger computers. Laptop docks have been around for decades and for decades I haven't been using them because they have all been expensive and specific to a particular model of laptop. Having an expensive dock at home and an expensive dock at the office and then replacing them both when the laptop is replaced may work well for some people but wasn't something I wanted to do. The USB-C interface supports data, power, and DisplayPort video over the same cable and now USB-C docks start at about $20 on eBay and dock functionality is built in to many new monitors. I can take a USB-C device to the office of any large company and know there's a good chance that there will be a USB-C dock ready for me to use. The fact that USB-C is a standard feature for phones gives obvious potential for convergence. The next issue is performance. The Passmark benchmark seems like a reasonable way to compare CPUs [2]. It may not be the best benchmark but it has an excellent set of published results for Intel and AMD CPUs. I ran that benchmark on my Librem5 [3] and got a result of 507 for the CPU score. At the end of 2017 I got a Thinkpad X301 [4] which rates 678 on Passmark. So the Librem5 has 3/4 the CPU power of a laptop that was OK for my use in 2018. Given that the X301 was about the minimum specs for a PC that I can use (for things other than serious compiles, running VMs, etc) the Librem 5 has 3/4 the CPU power, only 3G of RAM compared to 6G, and 32G of storage compared to 64G. Here is the Passmark page for my Librem5 [5]. As an aside my Librem5 is apparently 25% faster than the other results for the same CPU; did the Purism people do something to make their device faster than most? For me the Librem5 would be at the very low end of what I would consider a usable desktop system. A friend's N900 (like the one Kyle used) won't complete the Passmark test apparently due to the Extended Instructions (NEON) test failing. But of the rest of the tests most of them gave a result that was well below 10% of the result from the Librem5 and only the Compression and CPU Single Threaded tests managed to exceed 1/4 the speed of the Librem5. One thing to note when considering the specs of phones vs desktop systems is that the MicroSD cards designed for use in dashcams and other continuous recording devices have TBW ratings that compare well to SSDs designed for use in PCs, so swap to a MicroSD card should work reasonably well and be significantly faster than the hard disks I was using for swap in 2013! In 2013 I was using a Thinkpad T420 as my main system [6], it had 8G of RAM (the same as my current laptop) although I noted that 4G was slow but usable at the time. Basically it seems that the Librem5 was about the sort of hardware I could have used for convergence in 2013.
But by today's standards and with the need to drive 4K monitors etc it's not that great. The N900 hardware specs seem very similar to the Thinkpads I was using from 1998 to about 2003. However a device for convergence will usually do more things than a laptop (IE phone and camera functionality) and software had become significantly more bloated in the 1998 to 2013 time period. A Linux desktop system performed reasonably with 32MB of RAM in 1998 but by 2013 even 2G was limiting. Software Issues for Convergence Jeremiah Foster (Director PureOS at Purism) wrote an interesting overview of some of the software issues of convergence [7]. One of the most obvious is that the best app design for a small screen is often very different from that for a large screen. Phone apps usually have a single window that shows a view of only one part of the data that is being worked on (EG an email program that shows a list of messages or the contents of a single message but not both). Desktop apps of any complexity will either have support for multiple windows for different data (EG two messages displayed in different windows) or a single window with multiple different types of data (EG message list and a single message). What we ideally want is for all the important apps to support changing modes when the active display is changed to one of a different size/resolution. The Purism people are doing some really good work in this regard. But it is a large project that needs to involve a huge range of apps. The next thing that needs to be addressed is the OS interface for managing apps and metadata. On a phone you swipe from one part of the screen to get a list of apps while on a desktop you will probably have a small section of a large monitor reserved for showing a window list. On a desktop you will typically have an app to manage a list of items copied to the clipboard while on Android and iOS there is AFAIK no standard way to do that (there is a selection of apps in the Google Play Store to do this sort of thing). Purism has a blog post by Sebastian Krzyszkowiak about some of the development of the OS to make it work better for convergence and the status of getting it in Debian [8]. The limitations in phone hardware force changes to the software. Software needs to use less memory because phone RAM can't be upgraded. The OS needs to be configured for low RAM use which includes technologies like the zram kernel memory compression feature. Security When mobile phones first came out they were used for less secret data. Loss of a phone was annoying and expensive but not a security problem. Now phone theft for the purpose of gaining access to resources stored on the phone is becoming a known crime, here is a news report about a thief stealing credit cards and phones to receive the SMS notifications from banks [9]. We should expect that trend to continue, stealing mobile devices for ssh keys, management tools for cloud services, etc is something we should expect to happen. A problem with mobile phones in current use is that they have one login used for all access from trivial things done in low security environments (EG paying for public transport) to sensitive things done in more secure environments (EG online banking and healthcare). Some applications take extra precautions for this EG the Android app I use for online banking requires authentication before performing any operations. The Samsung version of Android has a system called Knox for running a separate secured workspace [10].
I don't think that the Knox approach would work well for a full Linux desktop environment, but something that provides some similar features would be a really good idea. Also running apps in containers as much as possible would be a good security feature, this is done by default in Android and desktop OSs could benefit from it. The Linux desktop security model of logging in to a single account and getting access to everything has been outdated for a long time, probably ever since single-user Linux systems became popular. We need to change this for many reasons and convergence just makes it more urgent. Conclusion I have become convinced that convergence is the way of the future. It has the potential to make transporting computers easier, purchasing cheaper (buy just a phone and not buy desktop and laptop systems), and access to data more convenient. The Librem5 doesn't seem up to the task for my use due to being slow and having short battery life, the PinePhone Pro has more powerful hardware and allegedly has better battery life [11] so it might work for my needs. The PinePhone Pro probably won't meet the desktop computing needs of most people, but hardware keeps getting faster and cheaper so eventually most people could have their computing needs satisfied with a phone. The current state of software for convergence and for Linux desktop security needs some improvement. I have some experience with Linux security so this is something I can help work on. To work on improving this I asked Linux Australia for a grant for me and a friend to get PinePhone Pro devices and a selection of accessories to go with them. Having both a Librem5 and a PinePhone Pro means that I can test software in different configurations which will make developing software easier. Also having a friend who's working on similar things will help a lot, especially as he has some low level hardware skills that I lack. Linux Australia awarded the grant and now the PinePhones are in transit. Hopefully I will have a PinePhone in a couple of weeks to start work on this.

23 May 2023

Craig Small: Devices with cgroup v2

Docker and other container systems by default restrict access to devices on the host. They used to do this with the cgroup v1 device controller; however, the second version of cgroups removed this controller, and the man page says:
Cgroup v2 device controller has no interface files and is implemented on top of cgroup BPF.
https://www.kernel.org/doc/Documentation/admin-guide/cgroup-v2.rst
That is just awesome, nothing to see here, go look at the BPF documents if you have cgroup v2. With cgroup v1, if you wanted to know what devices were permitted, you would just cat /sys/fs/cgroup/XX/devices.allow and you were done! The kernel documentation is not very helpful, sure it's something in BPF and has something to do with the cgroup BPF specifically, but what does that mean? There doesn't seem to be an easy corresponding method to get the same information. So to see what restrictions a docker container has, we will have to:
  1. Find what cgroup the programs running in the container belong to
  2. Find what is the eBPF program ID that is attached to our container cgroup
  3. Dump the eBPF program to a text file
  4. Try to interpret the eBPF syntax
The last step is by far the most difficult.

Finding a container's cgroup All containers have a short ID and a long ID. When you run the docker ps command, you get the short ID. To get the long ID you can either use the --no-trunc flag or just guess from the short ID. I usually do the second.
$ docker ps 
CONTAINER ID   IMAGE            COMMAND       CREATED          STATUS          PORTS     NAMES
a3c53d8aaec2   debian:minicom   "/bin/bash"   19 minutes ago   Up 19 minutes             inspiring_shannon
So the short ID is a3c53d8aaec2 and the long ID is a big ugly hex string starting with that. I generally just paste the relevant part in the next step and hit tab. For this container the cgroup is /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/ Notice that the last directory is docker- followed by the long ID (which starts with the short ID). If you're not sure of the exact path: /sys/fs/cgroup is the cgroup v2 mount point, which can be found with mount -t cgroup2, and the rest is the actual cgroup name. If you know the process running in the container then the cgroup column in ps will show you.
$ ps -o pid,comm,cgroup 140064
    PID COMMAND         CGROUP
 140064 bash            0::/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope
Either way, you will have your cgroup path.
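If guessing from the short ID feels fragile, docker inspect can print the full ID, and (assuming the systemd cgroup driver, which is what produces the system.slice/docker-<id>.scope layout shown above) the cgroup path can be derived from it:
$ docker inspect --format '{{.Id}}' a3c53d8aaec2
a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779
$ ls -d /sys/fs/cgroup/system.slice/docker-$(docker inspect --format '{{.Id}}' a3c53d8aaec2).scope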

eBPF programs and cgroups Next we will need to get the eBPF program ID that is attached to our recently found cgroup. To do this, we will need to use the bpftool. One thing that threw me for a long time is when the tool talks about a program or a PROG ID they are talking about the eBPF programs, not your processes! With that out of the way, let's find the prog ID.
$ sudo bpftool cgroup list /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/
ID       AttachType      AttachFlags     Name
90       cgroup_device   multi
Our cgroup is attached to the eBPF prog with ID 90 and the type of program is cgroup_device.
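As an optional sanity check before dumping it, bpftool can also describe that program directly (same ID 90 as above):
$ sudo bpftool prog show id 90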

Dumping the eBPF program Next, we need to get the actual code that is run every time a process running in the cgroup tries to access a device. The program will take some parameters and will return either a 1 for yes you are allowed or a zero for permission denied. Don't use the file option as it dumps the program in binary format. The text version is hard enough to understand.
sudo bpftool prog dump xlated id 90 > myebpf.txt
Congratulations! You now have the eBPF program in a human-readable (?) format.

Interpreting the eBPF program The eBPF format as dumped is not exactly user friendly. It probably helps to first go and look at an example program to see what is going on. You'll see that the program splits type (lower 16 bits) and access (upper 16 bits) out of the first word and then does comparisons on those values. The eBPF has something similar:
   0: (61) r2 = *(u32 *)(r1 +0)
   1: (54) w2 &= 65535
   2: (61) r3 = *(u32 *)(r1 +0)
   3: (74) w3 >>= 16
   4: (61) r4 = *(u32 *)(r1 +4)
   5: (61) r5 = *(u32 *)(r1 +8)
What we find is that once we get past the first few lines filtering the given value that the comparison lines have:
  • r2 is the device type, 1 is block, 2 is character.
  • r3 is the device access, it's used with r1 for comparisons after masking the relevant bits. mknod, read and write are bit flags with values 1, 2 and 4 respectively.
  • r4 is the major number
  • r5 is the minor number
For even a pretty simple setup, you are going to have around 60 lines of eBPF code to look at. Luckily, you'll often find the lines for the command options you added will be near the end, which makes it easier. For example:
  63: (55) if r2 != 0x2 goto pc+4
  64: (55) if r4 != 0x64 goto pc+3
  65: (55) if r5 != 0x2a goto pc+2
  66: (b4) w0 = 1
  67: (95) exit
This is a container using the option --device-cgroup-rule='c 100:42 rwm'. It is checking if r2 (device type) is 2 (char) and r4 (major device number) is 0x64 or 100 and r5 (minor device number) is 0x2a or 42. If any of those are not true, move to the next section, otherwise return with 1 (permit). We have all access modes permitted so it doesn't check for them. The previous example has all permissions for our device with id 100:42; what about if we only want read access, with the option --device-cgroup-rule='c 100:42 r'? The resulting eBPF is:
  63: (55) if r2 != 0x2 goto pc+7  
  64: (bc) w1 = w3
  65: (54) w1 &= 2
  66: (5d) if r1 != r3 goto pc+4
  67: (55) if r4 != 0x64 goto pc+3
  68: (55) if r5 != 0x2a goto pc+2
  69: (b4) w0 = 1
  70: (95) exit
The code is almost the same but we are checking that w3 only has the second bit set, which is for reading, effectively checking for X==X&2. It's a cautious approach, meaning a request for no access still passes but a request with multiple bits set will fail.

The device option docker run allows you to specify device files you want to grant your containers access to with the --device flag. This flag actually does two things. The first is to create the device file in the container's /dev directory, effectively doing a mknod command. The second thing is to adjust the eBPF program. If the device file we specified actually did have a major number of 100 and a minor of 42, the eBPF would look exactly like the above snippets.
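As an illustration (the device path /dev/mydev and the image are placeholders; the 100:42 numbers match the rule discussed above), passing a device through looks like this:
$ sudo mknod /dev/mydev c 100 42
$ docker run --rm -it --device /dev/mydev debian:minicom /bin/bash
# From another terminal, finding the new container's cgroup and dumping its
# program as shown earlier should reveal checks against 0x64 (100) and
# 0x2a (42) just like the snippets above.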

What about privileged? So we have used the direct cgroup options here, what does the --privileged flag do? This lets the container have full access to all the devices (if the user running the process is allowed). Like the --device flag, it makes the device files as well, but what does the filtering look like? We still have a cgroup but the eBPF program is greatly simplified, here it is in full:
   0: (61) r2 = *(u32 *)(r1 +0)
   1: (54) w2 &= 65535
   2: (61) r3 = *(u32 *)(r1 +0)
   3: (74) w3 >>= 16
   4: (61) r4 = *(u32 *)(r1 +4)
   5: (61) r5 = *(u32 *)(r1 +8)
   6: (b4) w0 = 1
   7: (95) exit
There are the usual setup lines and then a return of 1. Everyone is a winner for all devices and access types!

22 May 2023

Adnan Hodzic: rpi-microk8s-bootstrap: Automate RPI device conversion into Kubernetes cluster nodes with Terraform

Considering I've created my own private cloud in my home as part of: wp-k8s: WordPress on privately hosted Kubernetes cluster (Raspberry Pi 4 + Synology).... The post rpi-microk8s-bootstrap: Automate RPI device conversion into Kubernetes cluster nodes with Terraform appeared first on FoolControl: Phear the penguin.

12 May 2023

Dirk Eddelbuettel: crc32c 0.0.2 on CRAN: Build Fixes

A first follow-up to the initial announcement just days ago of the new crc32c package. The package offers cyclical checksum with parity in hardware-accelerated form on (recent enough) Intel CPUs as well as on arm64. This follow-up was needed because I missed, when switching to a default static library build, that newest compilers would complain if -fPIC was not set (a sketch of the general fix appears after the NEWS entries below). gcc-12 on my box was happy, gcc-13 on recent Fedora as used at CRAN was not. A second error was assuming that saying SystemRequirements: cmake would suffice. But hold on whippersnapper: macOS always has a surprise for you! As described at the end of the appropriate section in Writing R Extensions, on that OS you have to go to the basement, open four cupboards, rearrange three shelves and then you get to use it. And then in doing so (from an added configure script) I failed to realize Windows needed a fallback. Gee. The NEWS entry for this (as well as the initial release) follows.

Changes in version 0.0.2 (2023-05-11)
  • Explicitly set cmake property for position-independent code
  • Help macOS find its cmake binary as detailed also in WRE
  • Help Windows with a non-conditional Makevars.win pointing at cmake
  • Add more badges to README.md

Changes in version 0.0.1 (2023-05-07)
  • Initial release version and CRAN upload
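
For anyone hitting the same -fPIC issue in their own cmake-backed package, the general shape of the fix (a sketch of the technique only, not necessarily the exact change made in crc32c; the source path is a placeholder) is to set the position-independent-code property when configure invokes cmake:
$ cmake -S path/to/bundled/lib -B build -DCMAKE_POSITION_INDEPENDENT_CODE=ON
$ cmake --build build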

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

9 May 2023

C.J. Collier: Instructions for installing Proxmox onto the Qotom device

These instructions are for Qotom devices Q515P and Q1075GE. You can order one from Amazon or directly from Cherry Ni <export03@qotom.com>. Instructions are for those coming from Windows. Prerequisites: To find your Windows network details, run the following command at the command prompt:
netsh interface ip show addresses
Here's my output:
PS C:\Users\cjcol> netsh interface ip show addresses "Wi-Fi"
Configuration for interface "Wi-Fi"
    DHCP enabled:                         Yes
    IP Address:                           172.16.79.53
    Subnet Prefix:                        172.16.79.0/24 (mask 255.255.255.0)
    Default Gateway:                      172.16.79.1
    Gateway Metric:                       0
    InterfaceMetric:                      50
Did you follow the instructions linked above in the prerequisites section? If not, take a moment to do so now.
Open Rufus and select the Proxmox ISO which you downloaded. You may be warned that Rufus will be acting as dd.
Don't forget to select the USB drive that you want to write the image to. In my example, the device is creatively called 'NO_LABEL'.
You may be warned that re-imaging the USB disk will result in the previous data on the USB disk being lost.
Once the process is complete, the application will indicate that it is complete.
You should now have a USB disk with the Proxmox installer image on it. Place the USB disk into one of the blue, USB-3.0, USB-A slots on the Qotom device so that the system can read the installer image from it at full speed. The Proxmox installer requires a keyboard, video and mouse. Please attach these to the device along with inserting the USB disk you just created. Press the power button on the Qotom device. Press the F11 key repeatedly until you see the AMI BIOS menu. Press F11 a couple more times. You'll be presented with a boot menu. One of the options will launch the Proxmox installer. By trial and error, I found that the correct boot menu option was 'UEFI OS'. Once you select the correct option, you will be presented with a menu that looks like this. Select the default option and install. During the install, you will be presented with an option of the block device to install to. I think there's only a single block device in this Celeron, but if there are more than one, I prefer the smaller one for the Proxmox OS. I also make a point to limit the size of the root filesystem to 16G. I think it will take up the entire volume group if you don't set a limit. Okay, I'll do another install and select the correct filesystem. If you read this far and want me to add some more screenshots and better instructions, leave a comment.
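As an aside (not part of the original Windows-based instructions): since Rufus is doing a raw copy, the same USB stick can also be prepared from an existing Linux machine with dd. The ISO filename and /dev/sdX are placeholders, so double-check the target device with lsblk first:
$ sudo dd if=proxmox-ve.iso of=/dev/sdX bs=4M status=progress conv=fsync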
